Comparative Analysis of SDG Implementation Evolution Worldwide
Author
Lodrik Adam, Sofia Benczédi, Stefan Favre, Delia Fuchs
Published
November 19, 2023
Introduction
Overview and Motivation
The global significance of the SDGs is our starting point. The adoption of the SDGs by the United Nations in 2015 marked a significant global commitment to address pressing issues such as poverty, inequality, climate change, and more. The fact that these goals were unanimously adopted by 193 member states underscores their importance. This prompted us to ask ourselves: can we evaluate the progress? What has really been done so far? Although the SDGs have attracted considerable attention and backing, it is essential to evaluate the events preceding and following their implementation. Understanding the actions taken and the progress made is essential to determine whether these global commitments are resulting in tangible improvements to individuals' lives. By examining the evolution of all countries and their respective contributions towards achieving the SDGs, we can develop a comprehensive understanding of collective efforts and identify potential disparities or gaps.
Related Work
Research questions
Focus on factors: What can explain the state of countries regarding sustainable development? (We will analyse different factors: scores from the Human Freedom Index, GDP per capita, military expenditures in % of GDP/government expenditures, unemployment rate, and internet usage.) See the data description for more precise information about the factors.
Focus on time: How has the adoption of the SDGs in 2015 influenced the achievement of the SDGs? (We want to compare the achievement (SDG scores, which are calculated even for years before the adoption) of the different countries before and after 2015 to see if the adoption of the SDGs gave a real "push" to sustainable development.)
Focus on events: Is the evolution in sustainable development influenced by uncontrollable events, such as economic crises, health crises and natural disasters? (We will analyse the impact of COVID, natural disasters and conflicts (number of deaths, damages, etc.) on the SDG scores.) See the data description for more precise information about how the impact of these events is materialized in the data.
Focus on relationships between SDGs: How are the different SDGs linked? (We want to see whether some SDGs are linked in the sense that a high score on one implies a high score on another, and thus whether we can form groups of SDGs that are comparable in that way.)
Data
Sources
We collect our data from the Sustainable Development Report (SDG), the International Labour Organization (ILOSTAT), the World Bank, Our World in Data, the CATO Institute, Kaggle (disasters: we couldn't find relevant, accessible information elsewhere) and GitHub. We found different datasets containing useful information related to the SDGs. The details about these data and the links are presented in the next question.
During the wrangling process, we add data to our main table (D1_1_SDG) from the other datasets, matching on the country, the country code and the year. The table below shows all 9 databases that we merge to obtain our final table for the analysis, as well as each variable of interest that we keep.
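The merge step described above can be sketched as follows. This is a minimal illustration with invented toy values, not the report's actual data; only the join keys (code, country, year) come from the text:

```r
library(dplyr)

# Toy stand-ins for the real tables (values invented for illustration)
D1_1_SDG <- data.frame(code = c("CHE", "FRA"),
                       country = c("Switzerland", "France"),
                       year = c(2015L, 2015L),
                       overallscore = c(78.9, 77.1))
D3_0_GDP_per_capita <- data.frame(code = c("CHE", "FRA"),
                                  country = c("Switzerland", "France"),
                                  year = c(2015L, 2015L),
                                  GDPpercapita = c(86000, 41000))

# A left join keeps every (code, country, year) row of the main SDG table
# and attaches the matching variable from the secondary table
merged <- D1_1_SDG %>%
  left_join(D3_0_GDP_per_capita, by = c("code", "country", "year"))
```

Repeating this join once per secondary table yields the final analysis table.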
| Table name | Variable name | Explanation | # obs before cleaning | # obs after cleaning |
|---|---|---|---|---|
| In all databases | code | Country code (ISO) | | |
| | country | Name of the country | | |
| | year | Year of the observation (2000-2022) | | |
| D1_1_SDG | overallscore | Overall score on all 17 SDGs (the scores are % of achievement of the goals determined by the UN based on several indicators) | 4140 obs. of 120 variables | 3818 obs. of 21 variables |
| | goal1:goal17 | Score on each SDG except SDG 14 (16 variables) | | |
| | population | Number of people living in the country | | |
| D2_2_Unemployment_rate | unemployment.rate | Unemployment rate (% of the population 15 years old and older) | 82800 obs. of 8 variables | 571 obs. of 5 variables |
| D3_0_GDP_per_capita | GDPpercapita | GDP per capita | 266 obs. of 68 variables | 3818 obs. of 4 variables |
| D3_1_Military_expenditure_percent_GDP | MilitaryExpenditurePercentGDP | Military expenditures in percentage of GDP | 266 obs. of 68 variables | 3818 obs. of 4 variables |
| D3_2_Military_expenditure_percent_gov_exp | MilitaryExpenditurePercentGovExp | Military expenditures in percentage of government expenditures | 266 obs. of 68 variables | 3818 obs. of 4 variables |
| D4_0_Internet_usage | internet.usage | Internet usage (% of the population) | 6570 obs. of 4 variables | 3433 obs. of 4 variables |
| D5_0_Human_freedom_index | region | Part of the world, group of countries (e.g. Eastern Europe, Sub-Saharan Africa, South Asia, etc.) | 3465 obs. of 141 variables | 3339 obs. of 18 variables |
| | hf_score | Human Freedom score = mean of personal freedom (PF) and economic freedom (EF) | | |
| | pf_law | Rule of law, mean score of: procedural justice, civil justice, criminal justice, rule of law (V-Dem) | | |
| | pf_security | Security and safety, mean score of: homicide, disappearances, conflicts, terrorism | | |
| | pf_movement | Freedom of movement (V-Dem), freedom of movement (CLD) | | |
| | pf_religion | Freedom of religion, religious organization repression | | |
| | pf_assembly | Civil society entry and exit, freedom of assembly, freedom to form/run political parties, civil society repression | | |
| | pf_expression | Direct attacks on the press; media and expression (V-Dem, Freedom House, BTI, CLD) | | |
| | pf_identity | Same-sex relationships, divorce, inheritance rights, female genital mutilation | | |
| | pf_score | Mean of every PF component score | | |
| | ef_government | Government consumption, transfers and subsidies, government investment, top marginal tax rate, state ownership of assets | | |
| | ef_legal | Judicial independence, impartial courts, protection of property rights, military interference, integrity of the legal system, legal enforcement of contracts, regulatory costs, reliability of police | | |
| | ef_money | Money growth, standard deviation of inflation, inflation (most recent year), freedom to own foreign currency | | |
| | ef_trade | Tariffs, regulatory trade barriers, black-market exchange rates, movement of capital and people | | |
| | ef_regulation | Credit market regulations, labor market regulations, business regulations | | |
| | ef_score | Mean of every EF component score | | |
| D6_0_Disasters | continent | Continents touched by the disasters, such as floods and hurricanes | 14644 obs. of 47 variables | 2435 obs. of 10 variables |
| | total_deaths | Number of total deaths caused by the disasters | | |
| | no_injured | Number of injured people | | |
| | no_affected | Number of affected people | | |
| | no_homeless | Number of people that lost their home and are now homeless | | |
| | total_affected | Sum of people affected (sum of the variables no_injured, no_affected, no_homeless) | | |
| | total_damages | Total of infrastructure damages | | |
| D7_0_COVID | deaths_per_million | Number of people dead due to COVID | 349966 obs. of 67 variables | 501 obs. of 6 variables (only 2020-2022, no COVID before) |
| | cases_per_million | Number of COVID cases | | |
| | stringency | Government Response Stringency Index: composite measure based on 9 response indicators including school closures, workplace closures, and travel bans | | |
| D8_0_Conflicts | ongoing | Variable coded 1 for more than 25 deaths in intrastate conflict and 0 otherwise, according to the UCDP/PRIO Armed Conflict Dataset 17.1 | 5016 obs. of 18 variables | 2782 obs. of 8 variables |
| | sum_deaths | Best estimate of deaths in all categories of violence (non-state, one-sided and state-based) recorded by the Uppsala Conflict Data Program in the country, based on the UCDP GED dataset (unpublished 2016 data); the location of these events is used for estimating the extent of violence | | |
| | pop_affected | Share of population affected by violence in percentage (0 to 100), measured as described above based on population data from CIESIN, the PRIO-GRID structure as well as UCDP GED | | |
| | area_affected | Area affected by conflict | | |
| | maxintensity | Two intensity levels are coded: minor armed conflicts (1) and wars (2); takes the max intensity of conflict in the country, so it is coded 2 if there is at least one war (>=1000 deaths in intrastate conflict) during the year (UCDP/PRIO Armed Conflict Dataset 17.1) | | |
Wrangling/cleaning
To accommodate the large scale of the datasets we intended to utilize, we decided to pre-clean each of our datasets before merging them. This allowed us to simplify the process of cleaning our final dataset afterwards.
1. Dataset on SDG
This is our main dataset, which we clean in order to keep the columns containing the following information: country name, country code, year, population, overall score and the SDG scores.
We begin by importing the data and transforming it into a dataframe. We rename the columns and transform the scores into numeric variables.
Seeing that population has a lot of NAs, we investigate and find out that it is normal to have missing values, because some of the observations are not countries but regions, so we can drop these observations.
We see that there are NAs in only 3 SDG scores (1, 10 and 14) and that when there are NAs for a country, it is for all years or none. We decide to run more investigations on those 3 SDG scores to decide whether we keep them for the analysis.
For goal 1, there are only 9.04% missing values in 15 different countries. Goal 1 being “end poverty”, we decide to keep it and only remove the countries with no information for the analysis.
For goal 10, there are only 10.2% missing values in 17 different countries. Goal 10 being “reduced inequalities”, we decide to keep it and only remove the countries with no information for the analysis.
For goal 14, there are 24.1% missing values in 40 different countries. Goal 14 being "life below water", we decide not to keep it, because other SDGs such as "life on land" and "clean water" already treat similar subjects.
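The per-goal missing-value shares quoted above can be computed with a check of this shape. The data frame below is a toy stand-in; the real shares (9.04%, 10.2%, 24.1%) come from the full SDG table:

```r
# Toy SDG table with NAs (invented values for illustration)
sdg <- data.frame(goal1  = c(1.0,  NA, 3.0, 4.0),
                  goal10 = c(NA,   NA, 3.0, 4.0),
                  goal14 = c(NA,   NA,  NA, 4.0))

# Share of missing values in each goal column
na_share <- sapply(sdg, function(col) mean(is.na(col)))
```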
We will be working with different datasets and merging them based on the country code and the year. To make sure the match works well, we verify that the country names are encoded in UTF-8, then we standardize the country names (we needed a custom match for Turkey) and the country codes using the countrycode library. In addition, we create a list of all the country codes contained in the main database in order to filter the other databases. Finally, we complete the database to make sure all the combinations of (country, year) are present; the number of rows doesn't change.
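The completion step can be sketched with tidyr's complete(), which is a natural fit here (toy data for illustration; the real call runs on the full SDG table):

```r
library(tidyr)

# Toy table missing the (FRA, 2001) row
d <- data.frame(code = c("CHE", "CHE", "FRA"),
                year = c(2000L, 2001L, 2000L),
                score = c(70, 71, 68))

# complete() inserts every code-year combination, filling score with NA
# for combinations that were absent
d_full <- complete(d, code, year)
```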
Here are the first few lines of the cleaned dataset on SDG achievement scores:
As said, this is now our main dataset. All subsequent datasets will be merged with this dataset. Therefore, for all the following datasets, we want to make sure that we only keep data for the same countries and years as in this dataset. We have a total of 166 countries and the years range from 2000 to 2022.
2. Dataset on Unemployment rate
In this dataset, the initial step involves importing the data. Next, we ensure that the names and codes of the countries are formatted in UTF-8, preventing any discrepancies due to mismatches in country names. Following this, we modify the column names and filter the data to include only the relevant countries and years, specifically the years 2000 to 2022, covering 166 countries from our primary dataset.
Here are the first few lines of the cleaned dataset on Unemployment rate:
3. Datasets on GDP and military expenditures
We have three different databases which contain information on each country over the years; each year is represented by one column. We want to extract three variables for our analysis: GDP per capita, military expenditures in percentage of GDP and military expenditures in percentage of government expenditures.
After importing the data, we fill in the missing country codes using the column Indicator.Name, because we realized after some manipulations that some of the country codes were wrong, but the next column contained the right ones.
fill_code <- function(data) {
  data <- data %>%
    mutate(Country.Code = ifelse(!grepl("^[A-Z]{3}$", Country.Code),
                                 Indicator.Name, Country.Code))
}
We create a set of functions that we will apply to each database. First, remove the variables that we don’t need, which are the years before 2000. Second, make sure that the values are numeric and rename the year variables (because they all had an “X” before year number). Third, transform the database from wide to long, in order to match the main database. Fourth, transform the year variable into an integer variable and rearrange and rename the columns to match the ones of the other databases. Then, we apply these transformations to the three databases.
remove <- function(data) {
  years <- seq(1960, 1999)
  removeyears <- paste("X", years, sep = "")
  data <- data[, !(names(data) %in% c("Indicator.Name", "Indicator.Code", "X", removeyears))]
}

makenum <- function(data) {
  for (i in 2000:2022) {
    year <- paste("X", i, sep = "")
    data[[year]] <- as.numeric(data[[year]])
  }
  return(data)
}

renameyear <- function(data) {
  for (i in 2000:2022) {
    varname <- paste("X", i, sep = "")
    names(data)[names(data) == varname] <- gsub("X", "", varname)
  }
  return(data)
}

wide2long <- function(data) {
  data <- pivot_longer(data, cols = -c("Country.Name", "Country.Code"),
                       names_to = "year", values_to = "data")
  return(data)
}

yearint <- function(data) {
  data$year <- as.integer(data$year)
  return(data)
}

nameorder <- function(data) {
  colnames(data) <- c("country", "code", "year", "data")
  data <- data %>% select(c("code", "country", "year", "data"))
}

cleanwide2long <- function(data) {
  data <- fill_code(data)
  data <- remove(data)
  data <- makenum(data)
  data <- renameyear(data)
  data <- wide2long(data)
  data <- yearint(data)
  data <- nameorder(data)
}

GDPpercapita <- cleanwide2long(GDPpercapita)
MilitaryExpenditurePercentGDP <- cleanwide2long(MilitaryExpenditurePercentGDP)
MiliratyExpenditurePercentGovExp <- cleanwide2long(MiliratyExpenditurePercentGovExp)
We rename the columns with the main information, standardize the country codes and remove the countries that are not in our main database. We see that all 166 countries are there.
There were only 157 countries present both in the main SDG dataset and in these 3 datasets, but we suspected that some of the missing countries were in the database but not correctly matched. Indeed, the Bahamas was in the database but instead of the code "BHS" there was "The"; for "COD" it was "Dem. Rep."; for "COG" it was "Rep"; etc. We noticed that the code is in another column of the initial database, "Indicator.Name". We went back to the initial database, put in the right codes before cleaning it (as seen above), and after rerunning the code we have all 166 countries from the initial dataset.
We run a first round of investigation of the missing values and find that we have 16.4% for MiliratyExpenditurePercentGovExp, 12.9% for MilitaryExpenditurePercentGDP and 1.31% for GDPpercapita.
For GDPpercapita, only two countries (SOM and SSD) have a lot of missing values; in total, 11 countries have missing values.
GDPpercapita1 <- GDPpercapita %>%
  group_by(code) %>%
  summarize(NaGDP = mean(is.na(GDPpercapita))) %>%
  filter(NaGDP != 0)
print(GDPpercapita1, n = 180)
#> # A tibble: 11 x 2
#>    code   NaGDP
#>    <chr>  <dbl>
#>  1 AFG   0.130
#>  2 BTN   0.0435
#>  3 CUB   0.0870
#>  4 LBN   0.0435
#>  5 SOM   0.565
#>  6 SSD   0.652
#>  7 STP   0.0435
#>  8 SYR   0.0870
#>  9 TKM   0.0870
#> 10 VEN   0.304
#> 11 YEM   0.130
We plot the evolution of GDPpercapita over the years for each country containing missing values and distinguish the percentage of missing values with colors.
filtered_data_GDP <- GDPpercapita %>%
  filter(code %in% GDPpercapita1$code)  # countries with NAs
filtered_data_GDP <- filtered_data_GDP %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(GDPpercapita))) %>%  # column % NAs
  ungroup()

Evol_Missing_GDP <- ggplot(data = filtered_data_GDP) +
  geom_point(aes(x = year, y = GDPpercapita,
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of GDP per capita over time",
       x = "Year", y = "GDP per capita") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black"),
                     labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 4)
print(Evol_Missing_GDP)
For the countries with less than 30% of missing values and a linear evolution in time, we fill the missing values using linear interpolation.
list_code <- c("AFG", "BTN", "CUB", "STP", "TKM")
for (i in list_code) {
  country_data <- GDPpercapita %>% filter(code == i)
  interpolated_data <- na.interp(country_data$GDPpercapita)
  GDPpercapita[GDPpercapita$code == i, "GDPpercapita"] <- interpolated_data
}
Military expenditures in percentage of GDP
For MilitaryExpenditurePercentGDP, 12 countries have 100% missing values. We investigate further and keep them for now, knowing that some of these countries may also have many missing values in the other databases when we merge everything, and will be dropped later.
We plot the evolution of MilitaryExpenditurePercentGDP along the years for each country containing missing values and distinguish the percentage of missing values with colors.
filtered_data_Mil1 <- MilitaryExpenditurePercentGDP %>%
  filter(code %in% MilitaryExpenditurePercentGDP1$code)  # countries with NAs
filtered_data_Mil1 <- filtered_data_Mil1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>%  # column % NAs
  ungroup()

Evol_Missing_Mil1 <- ggplot(data = filtered_data_Mil1) +
  geom_line(aes(x = year, y = MilitaryExpenditurePercentGDP,
                color = cut(PercentageMissing,
                            breaks = c(0, 0.1, 0.2, 0.3, 1),
                            labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Military expenditure in % of GDP over time",
       x = "Years from 2000 to 2022", y = "Military expenditure in % of GDP") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black"),
                     labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 5) +
  theme(strip.text = element_text(size = 6)) +
  scale_x_continuous(breaks = NULL) +
  scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil1)
For the countries with less than 30% of missing values and a linear evolution in time, we fill the missing values using linear interpolation.
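This interpolation step mirrors the one applied to GDPpercapita above. A minimal self-contained sketch, using an invented country code "XXX" and invented values (the real loop runs over the codes selected from the plot):

```r
library(dplyr)
library(forecast)

# Hypothetical series with gaps; "XXX" and the values are invented
MilitaryExpenditurePercentGDP <- data.frame(
  code = "XXX",
  year = 2000:2004,
  MilitaryExpenditurePercentGDP = c(2.0, NA, 2.4, NA, 2.8))

for (i in c("XXX")) {
  country_data <- MilitaryExpenditurePercentGDP %>% filter(code == i)
  # na.interp() fills internal NAs by interpolation
  interpolated <- na.interp(country_data$MilitaryExpenditurePercentGDP)
  MilitaryExpenditurePercentGDP[MilitaryExpenditurePercentGDP$code == i,
                                "MilitaryExpenditurePercentGDP"] <- as.numeric(interpolated)
}
```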
Military expenditures in percentage of governement expenditures
For MilitaryExpenditurePercentGovExp, 17 countries have 100% missing values. We investigate further and keep them for now, knowing that some of these countries may also have many missing values in the other databases when we merge everything, and will be dropped later.
We plot the evolution of MilitaryExpenditurePercentGovExp along the years for each country containing missing values and distinguish the percentage of missing values with colors.
filtered_data_Mil2 <- MiliratyExpenditurePercentGovExp %>%
  filter(code %in% MiliratyExpenditurePercentGovExp1$code)  # countries with NAs
filtered_data_Mil2 <- filtered_data_Mil2 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(MiliratyExpenditurePercentGovExp))) %>%  # column % NAs
  ungroup()

Evol_Missing_Mil2 <- ggplot(data = filtered_data_Mil2) +
  geom_line(aes(x = year, y = MiliratyExpenditurePercentGovExp,
                color = cut(PercentageMissing,
                            breaks = c(0, 0.1, 0.2, 0.3, 1),
                            labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Military expenditure in % of government expenditures over time",
       x = "Years from 2000 to 2022",
       y = "Military expenditure in % of government expenditures") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black"),
                     labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 5) +
  theme(strip.text = element_text(size = 6)) +
  scale_x_continuous(breaks = NULL) +
  scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil2)
For the countries with less than 30% of missing values and a linear evolution in time, we fill the missing values using linear interpolation.
We now look again at the percentage of missing values for the three databases: 14.49% for MiliratyExpenditurePercentGovExp, 11.6% for MilitaryExpenditurePercentGDP and 1.07% for GDPpercapita.
Here are the first few lines of the cleaned dataset of GDP per capita:
Here are the first few lines of the cleaned dataset of military expenditures in percentage of GDP:
Here are the first few lines of the cleaned dataset of military expenditures in percentage of government expenditures:
4. Dataset on internet usage
To prepare the dataset on internet usage in the world to be merged with the other data, we first import the data. Then, we keep only the years we are interested in (2000 to 2022). We also rename the columns and keep only the countries that match the list of countries in the main SDG dataset.
Here are the first few lines of the cleaned dataset of internet usage:
5. Dataset on human freedom index
After importing the data from the CATO Institute website, we noticed that even though the file was called "Human Freedom Index 2022", the available observations only went from 2000 up to 2020. We first modified it to match our other datasets, by renaming/encoding/standardizing the columns containing the country names.
data <- read.csv(here("scripts", "data", "human-freedom-index-2022.csv"))

# Data in tibble
datatibble <- tibble(data)

# Rename the column "countries" into "country" to match the other databases
names(datatibble)[names(datatibble) == "countries"] <- "country"

# Make sure the encoding of the country names is UTF-8
datatibble$country <- iconv(datatibble$country, to = "UTF-8", sub = "byte")

# Standardize country names
datatibble <- datatibble %>%
  mutate(country = countrycode(country, "country.name", "country.name"))
Once done, we could verify which countries were or were not present between these observations and our main SDG dataset. We have decided to keep the ones that were matching between the two datasets.
# Merge by country name
datatibble <- datatibble %>%
  left_join(D1_0_SDG_country_list, by = "country")
datatibble <- datatibble %>% filter(code %in% list_country)
(length(unique(datatibble$code)))
#> [1] 159

# See which ones are missing
list_country_free <- c(unique(datatibble$code))
(missing <- setdiff(list_country, list_country_free))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"

# Turkey was missing but present in the initial database (a problem when
# standardizing the country names of D1_0_SDG_country_list, which we corrected);
# the other missing countries are: "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
D5_0_Human_freedom_index <- datatibble
Then, we noticed that there were a lot of columns that were not important for us, as 141 variables were taken into account. So we decided to keep the ones that refer to country information (such as code, year, etc.) and the human freedom scores per category (pf for personal freedom, ef for economic freedom).
# Drop unneeded columns to keep only the general ones
D5_0_Human_freedom_index <- select(D5_0_Human_freedom_index, year, country, region,
                                   hf_score, pf_rol, pf_ss, pf_movement, pf_religion,
                                   pf_assembly, pf_expression, pf_identity, pf_score,
                                   ef_government, ef_legal, ef_money, ef_trade,
                                   ef_regulation, ef_score, code)
D5_0_Human_freedom_index <- D5_0_Human_freedom_index %>%
  rename(pf_law = names(D5_0_Human_freedom_index)[5],       # rename the 5th column to "pf_law"
         pf_security = names(D5_0_Human_freedom_index)[6])  # rename the 6th column to "pf_security"
After renaming the columns pf_law/pf_security for comprehension purposes, we investigated how the NA values are distributed among the countries and the variables. After computing the percentages of missing values per country and variable, heatmaps proved to be a great tool for visualizing the data.
Then, to get a better understanding of the situation, we ordered the countries having at least one variable with 50% or more missing values.
na_long <- na_long %>%
  group_by(country) %>%
  mutate(Count_NA_50_100 = sum(NA_Percentage >= 50 & NA_Percentage <= 100, na.rm = TRUE)) %>%
  ungroup() %>%
  arrange(desc(Count_NA_50_100))

heatmap_ordered_all <- ggplot(na_long,
                              aes(x = reorder(country, -Count_NA_50_100), y = Variable)) +
  geom_tile(aes(fill = NA_Percentage), colour = "white") +
  scale_fill_gradient(low = "white", high = "red") +
  theme_minimal() +
  labs(title = "Heatmap of NA Percentages per Country and Variable",
       x = "Countries", y = "Variables", fill = "NA Percentage") +
  theme(axis.text.x = element_blank(),  # hide x-axis labels
        axis.text.y = element_text(size = 9))
print(heatmap_ordered_all)
We notice that only some countries appear to contain at least 50% missing values, and that most of the missing values concern the EF (Economic Freedom) variables. We then produced another heatmap containing only the ordered countries, also counting for each of these countries the number of variables with at least 50% NAs.
We conclude that 13 countries are concerned by our selection of 50% or more missing values. After discussion, we came to the conclusion that a great part of these 13 countries were not going to be selected anyway, because they also had a lot of missing values in our main dataset. Therefore, we decided to merge this data with the other datasets and finish the cleaning afterwards.
Here are the first few lines of the partially cleaned dataset on Human Freedom Index scores:
6. Dataset on Disasters
For this dataset concerning disasters, we imported the data from Kaggle, as we couldn't find the original dataset, which is private and comes from the EOSDIS system, an interactive interface for browsing full-resolution, global, daily satellite images from NASA. Once we made sure that our file called "Disasters" was converted into a data frame, we selected the specific columns we were interested in.
Because our file showed all the disasters in each country over the years 1970-2021 and we wanted to focus on a specific period, we filtered our data to the years between 2000 and 2022. Then we rearranged our data, changing the data types of all the columns and their names in order to match our other datasets.
# Rearrange the columns, change the data types, rename the columns
Rearanged_Disasters <- Disasters %>%
  filter(Year >= 2000 & Year <= 2022) %>%
  mutate(code = as.character(ISO),
         country = as.character(Country),
         year = as.integer(Year),
         continent = as.character(Continent),
         disaster.subgroup = as.character(Disaster.Subgroup),
         disaster.type = as.character(Disaster.Type),
         location = as.character(Location),
         total.deaths = as.numeric(Total.Deaths),
         no.injured = as.numeric(No.Injured),
         no.affected = as.numeric(No.Affected),
         no.homeless = as.numeric(No.Homeless),
         total.affected = as.numeric(Total.Affected),
         total.damages = as.numeric(Total.Damages...000.US..))
We then grouped the data by "year", "code", "country" and "continent" and summarized the data. Here we re-selected specific columns, as our first pre-selection was still too wide and some variables, such as disaster.subgroup and disaster.type, weren't pertinent. We arranged the columns based on "code", "country", "year" and "continent" to match the other datasets.
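The grouping and summarizing step just described can be sketched as follows. The records below are invented toy values; only the grouping keys and the sum-per-country-year idea come from the text:

```r
library(dplyr)

# Toy disaster records: two floods in the same country-year (invented values)
disasters <- data.frame(code = c("CHE", "CHE"),
                        country = "Switzerland",
                        year = 2005L,
                        continent = "Europe",
                        total.deaths = c(2, 3),
                        total.affected = c(100, 400))

# Collapse multiple disasters in a country-year into one summed row
yearly <- disasters %>%
  group_by(year, code, country, continent) %>%
  summarize(total_deaths = sum(total.deaths, na.rm = TRUE),
            total_affected = sum(total.affected, na.rm = TRUE),
            .groups = "drop")
```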
Finally, we filtered our disasters data to keep only the countries present in our main dataset. We analysed the missing countries and identified three (BHR, BRN, MLT) that are unexpectedly missing.
D6_0_Disasters <- D6_0_Disasters %>% filter(code %in% list_country)
length(unique(D6_0_Disasters$code))
#> [1] 163

# Here we see which countries are missing
list_country_disasters <- c(unique(D6_0_Disasters$code))
(missing <- c(missing, setdiff(list_country, list_country_disasters)))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB" "BHR" "BRN" "MLT"
Here are the first few lines of the cleaned dataset on Disasters:
7. Dataset on COVID
This dataset contains information on the COVID-19 pandemic between 2020 and 2022. The observations are daily (by year, month, day). After importing the database, we transform the date in format YYYY-MM-DD in order to keep only the year.
We perform a first round of investigation of the missing values before aggregating the values by year. We begin with the variables "cases per million" and "deaths per million": seeing that for each country we have either only missing values or a very low percentage of missing values (~1%), we can compute the sum over each year and ignore the missing values without altering the data. Indeed, where all the values are missing, the computation will return an NA. We then look at the "stringency" variable, where we have 3 scenarios:
~20% missing: we ignore missing values when computing the mean to have an idea of the stringency each year (because we compute the mean stringency over the year, a few missing days are not a problem; stringency cannot evolve that fast).
All missing: we can ignore the missing values when computing the mean, because it will still return a missing value.
Almost all missing: here the mean doesn't make sense, so we replace the values with NAs to be coherent. The countries with this issue are ERI, GUM, PRI and VIR. We verify whether they are in our main dataset; since none of them are, we can ignore the issue, as these lines will be removed later anyway.
We aggregate the observations of all days of a year in one observation per country using the mean.
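The yearly aggregation step can be sketched as follows, with invented daily values: sums for the count variables (as decided above) and the mean for the stringency index:

```r
library(dplyr)

# Toy daily COVID rows for one country-year (invented values)
covid_daily <- data.frame(code = "CHE",
                          year = 2020L,
                          cases_per_million = c(10, 30),
                          stringency = c(40, 60))

# One row per country-year: sum the counts, average the stringency index
covid_yearly <- covid_daily %>%
  group_by(code, year) %>%
  summarize(cases_per_million = sum(cases_per_million, na.rm = TRUE),
            stringency = mean(stringency, na.rm = TRUE),
            .groups = "drop")
```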
Now that all the variables of interest are aggregated by year, we remove all the variables that we don’t need and rename all the remaining variables to match the main dataset.
We remove the years that exceed 2022, we make sure that the country codes are all iso codes with 3 letters (we observe that sometimes they are preceded by “OWID_”) and we standardize the country codes.
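One way to handle the "OWID_"-prefixed codes is to keep only rows whose code is a 3-letter ISO code, which drops the OWID aggregates. A minimal base-R sketch with an invented code vector (the report may instead standardize the codes rather than drop them):

```r
# OWID uses codes like "OWID_EUR" for aggregate regions; keeping only
# 3-letter uppercase codes removes those rows (toy vector for illustration)
codes <- c("CHE", "OWID_EUR", "FRA", "OWID_WRL")
iso_only <- codes[grepl("^[A-Z]{3}$", codes)]
```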
We remove the observations of countries that aren’t in our main dataset on SDGs and find that all the 166 countries that we have in the main SDG dataset are also in this one.
We perform a second round of missing values investigation and find out that there are no missing values except for the stringency, where there are 4.19%. Either all values are missing for one country, or 50% are missing, so these 7 countries won’t be included when analyzing the effect of stringency on the SDG scores.
mean(is.na(COVID$cases_per_million))
#> [1] 0
mean(is.na(COVID$deaths_per_million))
#> [1] 0
mean(is.na(COVID$stringency))
#> [1] 0.0419

COVID4 <- COVID %>%
  group_by(code) %>%
  summarize(NaCOVID = mean(is.na(stringency))) %>%
  filter(NaCOVID != 0)
print(COVID4, n = 300)
#> # A tibble: 7 x 2
#>   code  NaCOVID
#>   <chr>   <dbl>
#> 1 ARM       1
#> 2 COM       1
#> 3 MDV       1
#> 4 MKD       1
#> 5 MNE       1
#> 6 NAM       0.5
#> 7 STP       1

D7_0_COVID <- COVID
Here are the first few lines of the cleaned dataset on COVID19:
8. Dataset on Conflicts
For our conflicts dataset, we imported the data from the World Bank data catalog. Once we made sure that our file was converted into a data frame, we selected the specific columns we were interested in.
Our file showed all the conflicts and their consequences per country over the years 2000-2016; we couldn't find a better or more complete dataset. As we consider conflicts as events, we will only take into account results between 2000 and 2016. Then we rearranged our data, changing the data types of all the columns and their names in order to match our other datasets. We grouped the data by "year" and "country", re-selected some variables and summarized the data.
```r
Rearanged_Conflicts <- Conflicts %>%
  filter(year >= 2000 & year <= 2022) %>%
  mutate(
    ongoing = as.integer(ongoing),
    country = as.character(country),
    year = as.integer(year),
    gwsum_bestdeaths = as.numeric(gwsum_bestdeaths),
    pop_affected = as.numeric(pop_affected),
    area_affected = as.numeric(area_affected),
    maxintensity = as.numeric(maxintensity)
  )

# Group the data by "year" and "country" and summarize
Conflicts <- Rearanged_Conflicts %>%
  group_by(year, country) %>%
  summarize(
    ongoing = sum(ongoing, na.rm = TRUE),
    sum_deaths = sum(gwsum_bestdeaths, na.rm = TRUE),
    pop_affected = sum(pop_affected, na.rm = TRUE),
    area_affected = sum(area_affected, na.rm = TRUE),
    maxintensity = sum(maxintensity, na.rm = TRUE)
  )
```
We then select specific columns from the summarized data and arrange it by those columns. To make our dataset compatible with the main one and let the merging succeed, we make some adjustments to the country names. We then standardize and merge by country name, and finally rearrange the data to retain only the countries present in our main dataset. Note that in the end only one additional country is missing that was absent from the initial conflicts database: BLR.
```r
conflicts <- Conflicts %>%
  select(country, year, ongoing, sum_deaths, pop_affected, area_affected, maxintensity) %>%
  arrange(country, year)
conflicts$country <- iconv(conflicts$country, to = "UTF-8", sub = "byte")
conflicts <- conflicts %>%
  mutate(country = countrycode(country, "country.name", "country.name"))
conflicts <- conflicts %>%
  left_join(D1_0_SDG_country_list, by = "country")
conflicts <- conflicts %>%
  select(code, country, year, ongoing, sum_deaths, pop_affected, area_affected, maxintensity) %>%
  arrange(code, country, year)
D8_0_Conflicts <- conflicts %>%
  filter(code %in% list_country)
(length(unique(conflicts$code)))
#> [1] 166
# See which countries are missing
list_country_conflicts <- c(unique(conflicts$code))
(missing <- c(missing, setdiff(list_country, list_country_conflicts)))
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB" "BHR" "BRN" "MLT"
#> [11] "BLR"
```
Here are the first few lines of the cleaned dataset on Conflicts:
Merge data
By merging our eight pre-cleaned datasets, we create the final database.
Since we took the information on the continent and region from databases other than the main one, we complete this information for the whole final dataset.
Here are the first few lines of the final dataset:
Final structure of our merged database: each of the 166 countries from D1_1_SDG is observed each year from 2000 to 2022, so each row has a key composed of (code, year) that uniquely identifies an observation. The other columns are the variables listed above. Because some countries have a lot of missing information, we will have to eliminate some of them, but we will still have more than 2000 rows in our database.
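Since (code, year) is meant to be a unique key, a quick sanity check can catch duplicate rows introduced by the merges. This is a sketch; the object name `final_data` is a hypothetical placeholder for the merged database.

```r
library(dplyr)

# `final_data` is an assumed name for the merged database described above.
# Every (code, year) pair should identify exactly one row.
duplicated_keys <- final_data %>%
  count(code, year) %>%
  filter(n > 1)
stopifnot(nrow(duplicated_keys) == 0)
```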
Treatment of missing values
We load our final database and subset it according to the data needed to answer the different questions. This will help us deal with the missing values.
For question 1, we only keep the years until 2020, because most of the explanatory variables that we want to use (those coming from the human freedom index) only have values until 2020.
For question 3, we create 3 distinct databases according to the different types of events that we will analyse: disasters, COVID19 and conflicts. For the disasters, we only keep the years until 2021, because we have no data after this date. For the conflicts, we only keep the years until 2016, for the same reason.
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. We decide to remove the countries that have more than 50 missing values.
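The per-country missing-value count described above can be sketched as follows; the column names follow the dataset, but the exact counting code is an assumption, since it is not shown in the rendered output.

```r
library(dplyr)

# Count NAs per country over all variables except goal1 and goal10,
# then keep only the countries with more than 50 missing values in total.
missing_by_country <- data_question1 %>%
  select(-goal1, -goal10) %>%
  group_by(code) %>%
  summarize(across(everything(), ~ sum(is.na(.x)))) %>%
  mutate(num_missing = rowSums(across(where(is.numeric)))) %>%
  filter(num_missing > 50)
```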
Here is the dataframe that shows which countries have missing values, how many and for which variables, when the total exceeds 50.
(Variables with zero missing values are omitted; MilGDP stands for MilitaryExpenditurePercentGDP and MilGov for MiliratyExpenditurePercentGovExp.)

| code | missing values by variable | num_missing |
|---|---|---|
| BHS | MilGDP 21, MilGov 21, pf_law 14 | 56 |
| BTN | MilGDP 21, MilGov 21, hf_score 13, pf_score 13, ef_money 13, ef_trade 13, ef_regulation 10, ef_score 13 | 117 |
| COM | MilGDP 21, MilGov 21, internet_usage 3, hf_score 19, pf_score 19, ef_government 19, ef_legal 19, ef_money 19, ef_trade 19, ef_regulation 19, ef_score 19 | 197 |
| CPV | hf_score 10, pf_score 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 60 |
| DJI | MilGDP 15, MilGov 12, hf_score 19, pf_score 19, ef_government 19, ef_legal 19, ef_money 19, ef_trade 19, ef_regulation 19, ef_score 19 | 179 |
| GIN | MilGDP 7, MilGov 7, hf_score 13, pf_score 13, ef_money 13, ef_trade 13, ef_regulation 11, ef_score 13 | 90 |
| GMB | hf_score 10, pf_score 10, ef_government 2, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 62 |
| IRQ | MilGDP 4, MilGov 4, internet_usage 2, hf_score 16, pf_score 16, ef_government 3, ef_money 16, ef_trade 16, ef_regulation 16, ef_score 16 | 109 |
| KHM | internet_usage 3, hf_score 10, pf_score 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 63 |
| LAO | MilGDP 7, MilGov 7, hf_score 14, pf_score 14, ef_money 14, ef_trade 14, ef_regulation 13, ef_score 14 | 97 |
| LBN | hf_score 10, pf_score 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 60 |
| LBR | internet_usage 2, hf_score 14, pf_score 14, ef_money 14, ef_trade 14, ef_regulation 10, ef_score 14 | 82 |
| QAT | MilGDP 12, MilGov 12, hf_score 10, pf_score 10, ef_government 7, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 91 |
| SAU | hf_score 10, pf_score 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 60 |
| SDN | MilGDP 5, MilGov 5, internet_usage 9, hf_score 16, pf_score 16, ef_money 16, ef_trade 16, ef_regulation 16, ef_score 16 | 115 |
| SOM | GDPpercapita 13, MilGDP 21, MilGov 13, internet_usage 4, hf_score 19, pf_score 19, ef_government 19, ef_legal 19, ef_money 19, ef_trade 19, ef_regulation 19, ef_score 19 | 203 |
| SUR | MilGDP 21, MilGov 21, hf_score 10, pf_identity 10, pf_score 10, ef_government 5, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 117 |
| SWZ | MilGov 21, internet_usage 3, hf_score 10, pf_score 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 84 |
| TJK | MilGDP 4, MilGov 5, internet_usage 3, hf_score 10, pf_score 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 72 |
| YEM | GDPpercapita 1, MilGDP 5, MilGov 5, internet_usage 3, hf_score 10, pf_score 10, ef_government 10, ef_money 10, ef_trade 10, ef_regulation 10, ef_score 10 | 84 |
Now, looking at the remaining countries that have missing values and their number across all variables, we decide to remove MiliratyExpenditurePercentGovExp, because it has too many missing values and contains information similar to MilitaryExpenditurePercentGDP.
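Dropping the redundant indicator is a single `select()` call; a minimal sketch (the slightly misspelled column name is kept exactly as it appears in the data):

```r
library(dplyr)

# Remove the near-duplicate military-spending measure;
# MilitaryExpenditurePercentGDP is kept.
data_question1 <- data_question1 %>%
  select(-MiliratyExpenditurePercentGovExp)
```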
Here is the dataframe that shows which countries still have missing values, how many and for which variables, after removing the countries with more than 50.
(Variables with zero missing values are omitted; MilGDP stands for MilitaryExpenditurePercentGDP and MilGov for MiliratyExpenditurePercentGovExp.)

| code | missing values by variable | num_missing |
|---|---|---|
| AGO | hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 30 |
| ARE | MilGDP 6, MilGov 6 | 12 |
| ARM | hf_score 4, pf_score 4, ef_money 4, ef_trade 4, ef_regulation 3, ef_score 4 | 23 |
| AUS | internet_usage 3 | 3 |
| AZE | internet_usage 2, hf_score 4, pf_score 4, ef_money 4, ef_trade 4, ef_score 4 | 22 |
| BDI | MilGov 3 | 3 |
| BFA | hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 30 |
| BIH | MilGDP 2, hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 32 |
| BLZ | internet_usage 3, pf_law 13 | 16 |
| BRB | MilGDP 21, MilGov 21, internet_usage 3 | 45 |
| CAF | MilGov 3 | 3 |
| CIV | MilGov 21, internet_usage 1 | 22 |
| COD | MilGov 21 | 21 |
| COG | MilGDP 6, internet_usage 3 | 9 |
| CRI | MilGDP 21, MilGov 21 | 42 |
| ETH | hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 30 |
| FJI | internet_usage 2 | 2 |
| GEO | hf_score 2, pf_score 2, ef_money 3, ef_trade 2, ef_score 2 | 11 |
| GUY | internet_usage 3 | 3 |
| HTI | MilGDP 13, MilGov 13 | 26 |
| ISL | MilGDP 21, MilGov 21 | 42 |
| JAM | internet_usage 2 | 2 |
| JOR | internet_usage 2 | 2 |
| KAZ | hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 30 |
| KGZ | hf_score 5, pf_score 5, ef_government 1, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 31 |
| LKA | internet_usage 6 | 6 |
| LSO | hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 30 |
| MDA | internet_usage 3, hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 33 |
| MDG | internet_usage 2 | 2 |
| MKD | hf_score 3, pf_score 3, ef_money 3, ef_trade 3, ef_score 3 | 15 |
| MMR | MilGDP 6, MilGov 6, internet_usage 1 | 13 |
| MNE | internet_usage 4, hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 34 |
| MNG | internet_usage 4, hf_score 4, pf_score 4, ef_money 4, ef_trade 4, ef_score 4 | 24 |
| MOZ | hf_score 3, pf_score 3, ef_money 3, ef_trade 3, ef_score 3 | 15 |
| MRT | MilGov 7, hf_score 5, pf_score 5, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 37 |
| MWI | internet_usage 1 | 1 |
| NER | internet_usage 3 | 3 |
| PAK | internet_usage 1 | 1 |
| PAN | MilGDP 21, MilGov 21 | 42 |
| PNG | internet_usage 3 | 3 |
| RWA | internet_usage 1, ef_trade 1 | 2 |
| SRB | internet_usage 4, hf_score 5, pf_score 5, ef_government 2, ef_money 5, ef_trade 5, ef_regulation 5, ef_score 5 | 36 |
| SYR | MilGDP 10, MilGov 10 | 20 |
| TCD | MilGDP 1, MilGov 1 | 2 |
| TGO | MilGDP 5, MilGov 5 | 10 |
| TTO | internet_usage 3 | 3 |
| USA | MilGov 21 | 21 |
| VEN | GDPpercapita 5, MilGov 21, internet_usage 3 | 29 |
| VNM | MilGDP 5, MilGov 5, hf_score 2, pf_score 2, ef_money 3, ef_trade 2, ef_score 2 | 21 |
| ZWE | MilGDP 3, MilGov 8 | 11 |
GDP per capita
Only Venezuela has missing values that we can not fill, so we delete the country.
Military expenditures in % of GDP
Then, we look at the distribution of the variable per region. Since all distributions are skewed, we replace the missing values, where less than 30% of them are missing, using the median by region.
```r
question1_missing_Military <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>%  # share of NAs
  ungroup() %>%
  group_by(region) %>%
  filter(sum(PercentageMissing, na.rm = TRUE) > 0)

Freq_Missing_Military <- ggplot(data = question1_missing_Military) +
  geom_histogram(aes(x = MilitaryExpenditurePercentGDP,
                     fill = cut(PercentageMissing,
                                breaks = c(0, 0.1, 0.2, 0.3, 1),
                                labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
                 bins = 30) +
  labs(title = "Distribution of Military expenditures in % of GDP",
       x = "Military expenditures in % of GDP", y = "Frequency") +
  scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                               "20-30%" = "red", "30-100%" = "black")) +
  guides(fill = guide_legend(title = "% missings")) +
  facet_wrap(~ region, nrow = 3)
print(Freq_Missing_Military)

data_question1 <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissingByCode = mean(is.na(MilitaryExpenditurePercentGDP))) %>%
  ungroup() %>%
  group_by(region) %>%
  mutate(
    MedianByRegion = median(MilitaryExpenditurePercentGDP, na.rm = TRUE),
    MilitaryExpenditurePercentGDP = ifelse(
      PercentageMissingByCode < 0.3 & !is.na(MilitaryExpenditurePercentGDP),
      MilitaryExpenditurePercentGDP,
      ifelse(PercentageMissingByCode < 0.3, MedianByRegion,
             MilitaryExpenditurePercentGDP)
    )
  ) %>%
  select(-PercentageMissingByCode, -MedianByRegion)
```
Internet usage
We look at the evolution of the variable over time. We fill the missing values with linear interpolation, because all evolutions are increasing and almost linear, except for CIV, which we delete.
```r
question1_missing_Internet <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(internet_usage))) %>%  # share of NAs
  filter(PercentageMissing > 0)

Evol_Missing_Internet <- ggplot(data = question1_missing_Internet) +
  geom_point(aes(x = year, y = internet_usage,
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of internet usage over time",
       x = "Years from 2000 to 2022", y = "Internet usage") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black")) +
  guides(color = guide_legend(title = "% missings")) +
  scale_x_continuous(breaks = NULL) +
  facet_wrap(~ code, nrow = 4)
print(Evol_Missing_Internet)

# Linear interpolation for every affected country except CIV, which is deleted
list_code <- setdiff(unique(question1_missing_Internet$code), "CIV")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$internet_usage)
  data_question1[data_question1$code == i, "internet_usage"] <- interpolated_data
}
data_question1 <- data_question1 %>% filter(code != "CIV")
list_country_deleted <- c(list_country_deleted, "CIV")
```
Human freedom index
First, we remove hf_score, pf_score and ef_score, because they have many missing values and, since these variables summarize the other ones, deleting them will not make us lose information.
Economic freedom: government
Only KGZ and SRB have missing values. We plot the values over time and fill the gaps with the value of the closest available year, since there are only one and two missing values respectively.
```r
Evol_Missing_ef_gov <- data_question1 %>% filter(code == "KGZ")
ggplot(Evol_Missing_ef_gov, aes(x = year, y = ef_government)) +
  geom_point() +
  labs(title = "Evolution of economic freedom: government over time in KGZ",
       x = "Years", y = "ef_gov")

Evol_Missing_ef_gov <- data_question1 %>% filter(code == "SRB")
ggplot(Evol_Missing_ef_gov, aes(x = year, y = ef_government)) +
  geom_point() +
  labs(title = "Evolution of economic freedom: government over time in SRB",
       x = "Years", y = "ef_gov")

# Fill the gaps with the closest available year
data_question1 <- data_question1 %>%
  mutate(ef_government = ifelse(code == "KGZ" & year == 2000 & is.na(ef_government),
                                ef_government[which(code == "KGZ" & year == 2001)],
                                ef_government)) %>%
  mutate(ef_government = ifelse(code == "SRB" & year == 2000 & is.na(ef_government),
                                ef_government[which(code == "SRB" & year == 2002)],
                                ef_government)) %>%
  mutate(ef_government = ifelse(code == "SRB" & year == 2001 & is.na(ef_government),
                                ef_government[which(code == "SRB" & year == 2002)],
                                ef_government))
```
Economic freedom: money
18 countries have missing values, but the percentage of missing values is always below 25%.
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
```r
question1_missing_ef_money <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(ef_money))) %>%  # share of NAs
  filter(PercentageMissing > 0)

Evol_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
  geom_point(aes(x = year, y = ef_money,
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of economic freedom: money over time",
       x = "Years from 2000 to 2022", y = "ef_money") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 4) +
  scale_x_continuous(breaks = NULL)
print(Evol_Missing_ef_money)

# Linear interpolation where the evolution is (almost) linear
list_code <- c("ARM", "BFA", "BIH", "GEO", "KAZ", "LSO", "MDA", "MKD")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$ef_money)
  data_question1[data_question1$code == i, "ef_money"] <- interpolated_data
}
```
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
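The median-by-region replacement described here is not shown in the rendered output; a hedged sketch of the pattern, applied to `ef_money` (the same idea would apply to the other economic freedom variables):

```r
library(dplyr)

# Replace remaining NAs in ef_money by the median of the country's region.
data_question1 <- data_question1 %>%
  group_by(region) %>%
  mutate(ef_money = ifelse(is.na(ef_money),
                           median(ef_money, na.rm = TRUE),
                           ef_money)) %>%
  ungroup()
```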
Economic freedom: trade
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
```r
Evol_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
  geom_point(aes(x = year, y = ef_trade,
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of economic freedom: trade over time",
       x = "Years from 2000 to 2022", y = "ef_trade") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 4) +
  scale_x_continuous(breaks = NULL)
print(Evol_Missing_ef_trade)

# Linear interpolation for AZE, BFA, ETH, GEO and VNM
list_code <- c("AZE", "BFA", "ETH", "GEO", "VNM")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$ef_trade)
  data_question1[data_question1$code == i, "ef_trade"] <- interpolated_data
}
```
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
Economic freedom: regulation
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
```r
question1_missing_ef_regulation <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(ef_regulation))) %>%  # share of NAs
  filter(PercentageMissing > 0)

Evol_Missing_ef_regulation <- ggplot(data = question1_missing_ef_regulation) +
  geom_point(aes(x = year, y = ef_regulation,
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of economic freedom: regulation over time",
       x = "Years from 2000 to 2022", y = "ef_regulation") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green",
                                "20-30%" = "red", "30-100%" = "black")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 4)
print(Evol_Missing_ef_regulation)

list_code <- c("ETH", "KAZ", "MDA", "SRB")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$ef_regulation)
  data_question1[data_question1$code == i, "ef_regulation"] <- interpolated_data
}
```
Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.
At this point, only goals 1 and 10 still contain missing values. As we did before, we investigate where the NAs are located in our dataset, first for goal 1, then for goal 10.
```r
# goal 1
question1_missing_goal1 <- data_question1 %>%
  group_by(code) %>%
  summarize(Na_goal1 = mean(is.na(goal1))) %>%
  filter(Na_goal1 != 0)
data_question1 <- data_question1 %>%
  filter(!code %in% question1_missing_goal1$code)
# Update list of countries deleted
list_country_deleted <- c(list_country_deleted, "KWT", "NZL", "OMN", "SGP", "UKR")
# still 42 NA values for goal 10
```
We found that the missing values were located in only 5 countries, so we decided to remove them. At this stage, only 42 missing values remained. We then repeat the same step for goal 10.
```r
# goal 10
question1_missing_goal10 <- data_question1 %>%
  group_by(code) %>%
  summarize(Na_goal10 = mean(is.na(goal10))) %>%
  filter(Na_goal10 != 0)
data_question1 <- data_question1 %>%
  filter(!code %in% question1_missing_goal10$code)
# Update list of countries deleted
list_country_deleted <- c(list_country_deleted, "GUY", "TTO")
```
We found the 2 last countries containing missing values. Now our dataset is completely clean and ready to be used for question 1.
Data for question 2 and 4
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. Since there are no other missing values, we stop here.
Disasters
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. We find that there are many missing values; here are the first few lines identifying them by country.
In this particular case, even though there are many missing values in our disaster dataset, we make the hypothesis that disaster events cannot happen every year in every country, given that these are uncontrollable and non-recurring events. Therefore the NAs that we encounter become zeroes, implying that no climatic disaster occurred.
```r
data_question3_1[is.na(data_question3_1)] <- 0
```
COVID19
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. Since there are no other missing values, we stop here.
Conflicts
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. Two countries have missing values; we remove them (MNE and SRB).
We display the distribution of the different SDG achievement scores, using boxplots to get an overview of the median, the range containing most of the observations, and the outliers.
```r
data_question1 <- read.csv(here("scripts", "data", "data_question1.csv"))
data_question24 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question2 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question3_1 <- read.csv(here("scripts", "data", "data_question3_1.csv"))
data_question3_2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
data_question3_3 <- read.csv(here("scripts", "data", "data_question3_3.csv"))
Q3.1 <- read.csv(here("scripts", "data", "data_question3_1.csv"))
Q3.2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
Q3.3 <- read.csv(here("scripts", "data", "data_question3_3.csv"))
data <- read.csv(here("scripts", "data", "all_Merge.csv"))

Correlation_overall <- data_question1 %>%
  select(population:ef_regulation)

#### Boxplots for the goals ####
# dev.off()
boxplot(Correlation_overall[2:18],
        las = 2,                        # axis labels perpendicular to the axis
        par(mar = c(5, 4, 4, 2) + 0.1), # margins adjusted to fit all labels
        cex.axis = 0.7,                 # smaller axis labels
        cex.lab = 1,
        notch = TRUE,
        main = "Merged goals boxplot",
        xlab = "Goals",
        ylab = "Score")
```
We see different patterns among the goals. Some are quite homogeneous with a small spread of values (e.g. the overall score, goals 2 and 8), while others have a large spread (e.g. goals 1 and 10). Goals 1, 3, 4, 7, 9, 10 and 13 take values across the whole range of possible percentages. Goals 2, 5, 8, 13 and 17 have extreme values outside the 95% confidence interval. It is interesting that goal 8 (decent work and economic growth) has the smallest spread, whereas goal 1 (no poverty) has the largest interquartile range. Goal 2 (zero hunger) has a tight spread of values but the greatest number of low outliers, meaning hunger levels are similar across most countries, but where they differ, they are much worse.
We now display boxplots for the different variables of the human freedom index, and then also for our other independent variables.
```r
# For Human Freedom Index scores
boxplot(Correlation_overall[23:34],
        las = 2,                    # axis labels perpendicular to the axis
        par(mar = c(7, 5, 2, 1)),   # margins adjusted to fit all labels
        cex.axis = 0.7,
        cex.lab = 1,
        notch = TRUE,
        main = "Merged Human Freedom Index scores boxplot",
        xlab = "Categories",
        ylab = "Score")

# For the remaining variables
par(mfrow = c(2, 3))
for (i in 19:22) {
  boxplot(Correlation_overall[, i], main = names(Correlation_overall[i]), type = "l")
}
par(mfrow = c(1, 1))
```
We now look at the variables in a summary table to have a more precise view of the numbers.
(The dataset has 3565 rows; `X` is a row index from 1 to 3565; `code`, `country`, `continent` and `region` are character columns; `year` runs from 2000 to 2022, with 1st quartile 2005, median 2011 and 3rd quartile 2017.)

| Variable | Min | 1st Qu. | Median | Mean | 3rd Qu. | Max | NA's |
|---|---|---|---|---|---|---|---|
| overallscore | 37.4 | 55.0 | 65.5 | 64.0 | 72.4 | 86.8 | 0 |
| goal1 | 0.0 | 44.5 | 87.4 | 71.7 | 98.8 | 100.0 | 276 |
| goal2 | 16.5 | 52.6 | 58.9 | 58.0 | 65.3 | 83.4 | 0 |
| goal3 | 5.9 | 44.3 | 70.9 | 64.1 | 81.4 | 97.3 | 0 |
| goal4 | 0.0 | 55.6 | 80.6 | 72.0 | 94.5 | 100.0 | 0 |
| goal5 | 3.5 | 43.2 | 58.0 | 56.0 | 68.9 | 94.0 | 0 |
| goal6 | 23.3 | 53.0 | 65.3 | 65.0 | 75.2 | 95.1 | 0 |
| goal7 | 0.1 | 41.5 | 65.5 | 57.9 | 72.6 | 99.6 | 0 |
| goal8 | 40.0 | 64.0 | 70.2 | 70.0 | 76.6 | 88.7 | 0 |
| goal9 | 0.3 | 15.5 | 29.4 | 37.5 | 53.9 | 99.2 | 0 |
| goal10 | 0.0 | 35.2 | 62.2 | 58.3 | 81.6 | 100.0 | 276 |
| goal11 | 20.3 | 55.8 | 75.3 | 70.3 | 85.1 | 99.1 | 0 |
| goal12 | 32.9 | 67.9 | 84.6 | 79.3 | 94.1 | 99.0 | 0 |
| goal13 | 0.0 | 72.9 | 90.8 | 82.1 | 97.2 | 99.9 | 0 |
| goal15 | 26.0 | 55.0 | 65.1 | 65.0 | 74.3 | 97.9 | 0 |
| goal16 | 27.9 | 51.5 | 61.4 | 62.6 | 74.6 | 96.0 | 0 |
| goal17 | 15.1 | 46.1 | 55.4 | 55.7 | 65.1 | 96.8 | 0 |
Focus on the influence of the factors over the SDG scores
After importing our cleaned data, we first looked at the correlations between our numerical variables.
By doing so, we obtain many positive and negative correlations. To help us better understand and get an overall vision of the situation, we use the following heatmap.
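The computation of the correlation matrix is not shown in the rendered output; a plausible sketch, assuming the matrix is built from the numeric columns of `data_question1` and stored as `cor_matrix`:

```r
library(dplyr)

# Pairwise-complete Pearson correlations between all numeric variables.
cor_matrix <- data_question1 %>%
  select(where(is.numeric)) %>%
  cor(use = "pairwise.complete.obs")
```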
```r
#### Heatmap ####
library(reshape2)  # for melt()
cor_melted <- melt(cor_matrix)
ggplot(data = cor_melted, aes(Var1, Var2, fill = value)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Pearson\nCorrelation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '', title = 'Correlation Matrix Heatmap')
```
In the correlation matrix heatmap, we notice that many of goals 1 to 11 are positively correlated with one another. On the other hand, goals 12 and 13 have negative relationships with the majority of our variables, except between themselves, where they are strongly correlated. In addition, we notice another strong correlation among the personal freedom (pf) variables, related to the Human Freedom Index scores on movement, religion, assembly and expression.
In order to have an overview of the relationship between our independent variables and the SDG overall score, we make several graphs containing the Pearson correlation coefficient between the variable, the scatter plots describing the relationship between the variables, as well as the distribution of each variable.
```r
#### Pearson's correlation coefficients ####
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y / max(y)
  rect(breaks[-nB], 0, breaks[-1], y, col = "lightgreen", ...)
}
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- cor(x, y)
  txt <- format(c(r, 0.123456789), digits = digits)[1]
  txt <- paste0(prefix, txt)
  if (missing(cex.cor)) cex.cor <- 0.8 / strwidth(txt)
  text(0.5, 0.5, txt, cex = cex.cor * r)
}

# Independent variables
pairs(data_question1[, c("overallscore", "unemployment.rate", "GDPpercapita",
                         "MilitaryExpenditurePercentGDP", "internet_usage")],
      upper.panel = panel.cor, diag.panel = panel.hist,
      main = "Correlation table and distribution of various variables")
```
The overall SDG achievement score is highly correlated with the percentage of people using the internet (r=.79) and GDP per capita (r=.60). The unemployment rate and military expenditures as a percentage of GDP do not seem to play a role. However, this holds only for the overall score.
```r
pairs(data_question1[, c("overallscore", "pf_law", "pf_security", "pf_movement",
                         "pf_religion", "pf_assembly", "pf_expression", "pf_identity")],
      upper.panel = panel.cor, diag.panel = panel.hist,
      main = "Correlation table and distribution of personal freedom variables")
```
The overall SDG achievement score is highly correlated with "personal freedom: law" (r=.69) and "personal freedom: identity" (r=.62). The other dimensions of personal freedom do not seem to have an important influence. Regarding the distributions of the personal freedom variables, we notice that, except for law, all are concentrated at high scores, meaning that most countries score high on these dimensions.
```r
pairs(data_question1[, c("overallscore", "ef_government", "ef_legal", "ef_money",
                         "ef_trade", "ef_regulation")],
      upper.panel = panel.cor, diag.panel = panel.hist,
      main = "Correlation table and distribution of economic freedom variables")
```
The overall SDG achievement score is highly correlated with "economic freedom: legal" (r=.77), "economic freedom: trade" (r=.67) and "economic freedom: money" (r=.60), while the other dimensions of economic freedom do not seem to have an important influence. Regarding the distributions of the economic freedom variables, we notice more heterogeneous distributions and scores across countries than for personal freedom.
Concerning the SDG goals, we conclude that most of our variables load on the first component, except goals 10 and 15, which are rather uncorrelated with dimension 1. In addition, as seen before, goals 12 and 13 are negatively correlated with the other goals. With eigenvalues greater than 1 for the first two components only, the Kaiser-Guttman rule tells us to retain 2 dimensions. Nevertheless, they explain less than 80% of the cumulated variance.
Now concerning the Human Freedom Index scores, most of the variables are positively correlated with dimension 1, slightly less so for the personal freedom scores on religion and security, and ef_government is uncorrelated with dimension 1. With eigenvalues greater than 1 for the first three components, we retain 3 dimensions. Again, they explain less than 80% of the cumulated variance.
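The PCA itself is not shown in the rendered output; a hedged sketch of the eigenvalue check behind the Kaiser-Guttman rule, using the goal columns as an example:

```r
library(dplyr)

# Principal component analysis on the standardized SDG goal scores.
pca_goals <- prcomp(na.omit(select(data_question1, starts_with("goal"))),
                    scale. = TRUE)
eigenvalues <- pca_goals$sdev^2

# Kaiser-Guttman rule: retain the components with eigenvalue > 1.
sum(eigenvalues > 1)
# Cumulated share of variance explained by the components.
cumsum(eigenvalues) / sum(eigenvalues)
```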
Due to the large number of observations, visualizing the clusters obtained with the k-means method is not very informative. Moreover, clustering aims at groups that differ from each other while the observations within a cluster vary little. Here, only 60.6% of the variance is explained by the variation between clusters, which is not enough.
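The between-cluster share of variance quoted above corresponds to the ratio `betweenss / totss` of a k-means fit; a sketch with hypothetical settings (the seed and the number of clusters are assumptions, not the values used in the report):

```r
library(dplyr)

# k-means on the standardized numeric variables; k = 3 and the seed
# are illustrative choices.
set.seed(42)
km_input <- scale(na.omit(select(data_question1, where(is.numeric))))
km <- kmeans(km_input, centers = 3, nstart = 25)

# Share of total variance explained by between-cluster variation.
km$betweenss / km$totss
```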
Focus on the evolution of SDG scores over time
First, we look at the evolution of SDG achievement overall score over time for the whole world, by continent and by region.
```r
data1 <- data_question2 %>%
  group_by(year) %>%
  mutate(mean_overall_score_by_year = mean(overallscore))
ggplot(data1) +
  geom_line(mapping = aes(x = year, y = mean_overall_score_by_year),
            color = "blue", lwd = 1) +
  scale_y_continuous(limits = c(0, 100)) +
  labs(title = "Evolution of the mean overall SDG achievement score across the world",
       y = "Mean Overall SDG Score", x = "Year")
```
The general evolution of SDG scores around the world is increasing over the years.
```r
data2 <- data_question2 %>%
  group_by(year, continent) %>%
  mutate(mean_overall_score_by_year = mean(overallscore))
ggplot(data2) +
  geom_line(mapping = aes(x = year, y = mean_overall_score_by_year,
                          color = continent), lwd = 1) +
  scale_y_continuous(limits = c(0, 100)) +
  labs(title = "Evolution of the mean overall SDG achievement score by continent",
       y = "Mean Overall SDG Score", x = "Year")
```
Looking at the continents, we see that Europe is above the others, while Africa is below, but in general, all have increasing overall scores.
```r
data3 <- data_question2 %>%
  group_by(year, region) %>%
  mutate(mean_overall_score_by_year = mean(overallscore))
ggplot(data3) +
  geom_line(mapping = aes(x = year, y = mean_overall_score_by_year,
                          color = region), lwd = 1) +
  scale_y_continuous(limits = c(0, 100)) +
  labs(title = "Evolution of the mean overall SDG achievement score by region",
       y = "Mean Overall SDG Score", x = "Year")
```
This view, which groups the countries by region, refines the previous information: it is Western Europe that is particularly above and Sub-Saharan Africa that is clearly below.
Second, we look at the evolution of the 16 individual SDG achievement scores over time for the whole world, by continent and by region.
```r
data4 <- data_question2 %>%
  group_by(year) %>%
  summarise(across(starts_with("goal"), mean, na.rm = TRUE)) %>%
  pivot_longer(cols = starts_with("goal"), names_to = "goal", values_to = "mean_value")

color_palette <- c("red", "blue", "green", "orange", "purple", "pink", "brown",
                   "gray", "cyan", "magenta", "yellow", "darkgreen", "darkblue",
                   "darkred", "darkorange", "darkcyan")

ggplot(data = data4) +
  geom_line(mapping = aes(x = year, y = mean_value, color = goal), size = 0.7) +
  scale_color_manual(values = color_palette) +
  scale_y_continuous(limits = c(0, 100)) +
  labs(title = "Evolution of the mean SDG achievement scores across the world",
       y = "Mean SDG Scores", x = "Year") +
  guides(color = guide_legend(ncol = 2,            # number of columns
                              title.position = "top",
                              title.hjust = 0.5))
```
Here, by looking at the SDGs individually, we notice that all SDGs except goal 9 (industry, innovation and infrastructure) are close to one another in terms of level and growth. Goal 9 starts far below the others in 2000 and grows faster, almost reaching a score of 50 by the end of the period.
ggplot(data = data4) +
  geom_line(mapping = aes(x = year, y = mean_value), size = 0.7) +
  scale_color_manual(values = color_palette) +
  scale_y_continuous(limits = c(0, 100)) +
  labs(
    title = "Evolution of the mean SDG achievement scores across the world",
    y = "Mean SDG Scores",
    x = "Year"
  ) +
  facet_wrap(~ goal, nrow = 4)
This graph shows the same information as the previous one, but faceted by goal, and it stands out that some goals have barely increased their scores over the last two decades, for example goal 13 (climate action) and goal 12 (responsible consumption and production).
Now, comparing the SDG scores by continent, we observe that most of the time Europe is at the top of the graph and Africa at the bottom, except for goals 12 and 13, which are linked to ecology. Some other points stand out:
The Americas are far behind the rest of the world on goal 10 (reduced inequalities).
Africa is far behind the other continents (though improving) on goals 1, 3, 4 and 7.
Goal 9 (industry, innovation and infrastructure) shows exponential growth for almost all continents.
Third, we create an interactive world map that lets us navigate from year 2000 to 2022 and see the level of achievement of the SDGs (overall score) for each country. To open it in your browser, use this html file: . Below is a static world map of the overall SDG achievement scores, which does not show the evolution over the years.
library(rnaturalearth)

# Load world map data
world <- ne_countries(scale = "medium", returnclass = "sf")

# Merge data with the world map data
data0 <- merge(world, data_question2, by.x = "iso_a3", by.y = "code", all.x = TRUE)

data0 %>%
  st_transform(crs = "+proj=robin") %>%
  ggplot() +
  geom_sf(color = "lightgrey") +
  geom_sf(aes(fill = overallscore), color = NA) +
  scale_fill_gradientn(
    colors = c("darkred", "orange", "yellow", "darkgreen"),
    values = scales::rescale(c(0, 0.25, 0.5, 1)),
    name = "Overall Score",
    na.value = NA
  ) +
  labs(title = "Mean overall SDG achievement score by country") +
  coord_sf(datum = NA) +
  theme_minimal()
Focus on the influence of events on the SDG scores
In order to get an overview of the relationship between the different event variables and the overall SDG score, we make several graphs showing the Pearson correlation coefficients between the variables, scatter plots describing their pairwise relationships, and the distribution of each variable.
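The `panel.cor` and `panel.hist` arguments passed to `pairs()` below are custom panel functions defined in a part of the report not shown here. As a rough sketch, they typically follow the base-graphics idiom from `?pairs`; the exact implementations used in the report may differ:

```r
# Sketch of typical pairs() panel helpers; the report's own definitions may differ.
panel.cor <- function(x, y, digits = 2, cex.cor = 1.5, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- cor(x, y, use = "complete.obs")  # Pearson correlation of the pair
  text(0.5, 0.5, format(r, digits = digits), cex = cex.cor)
}

panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  y <- h$counts / max(h$counts)         # rescale bars to fit the panel
  rect(h$breaks[-length(h$breaks)], 0, h$breaks[-1], y, col = "grey")
}
```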
pairs(
  data_question3_2[, c("overallscore", "cases_per_million", "deaths_per_million", "stringency")],
  upper.panel = panel.cor,
  diag.panel = panel.hist,
  main = "Correlation table and distribution of COVID variables"
)
The variables used to capture the impact of COVID-19 do not seem to have a strong influence on the overall score, but we will explore the individual SDGs further, since we believe that COVID-19 had a specific influence on some of them, for instance "good health and well-being" or "decent work and economic growth".
pairs(
  data_question3_3[, c("overallscore", "ongoing", "sum_deaths", "pop_affected", "area_affected", "maxintensity")],
  upper.panel = panel.cor,
  diag.panel = panel.hist,
  main = "Correlation table and distribution of conflicts variables"
)
The variables used to capture the impact of conflicts do not seem to have a strong influence on the overall score either, but we will explore the individual SDGs further, since we believe that conflicts have a specific influence on some of them.
To explore our data on events such as disasters, COVID-19 and conflicts, we first have to see which countries are most affected by them. To do so, we performed a time-series analysis for each of these three events, each time based on different variables.
# Convert the 'year' column to date format
Q3.1$year <- as.Date(as.character(Q3.1$year), format = "%Y")
Q3.2$year <- as.Date(as.character(Q3.2$year), format = "%Y")
Q3.3$year <- as.Date(as.character(Q3.3$year), format = "%Y")
This is our time-series analysis of COVID-19 cases per million by region between the end of 2018 and 2022.
covid_filtered <- Q3.2[Q3.2$year >= as.Date("2018-12-12"), ]

ggplot(data = covid_filtered, aes(x = year, y = cases_per_million, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.8, size = 0.5) +
  labs(title = "Trend of COVID-19 Cases per Million Over Time",
       x = "Year", y = "Cases per Million") +
  facet_wrap(~ region, nrow = 2) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(nrow = 4))
This is our time-series analysis of COVID-19 deaths per million by region between the end of 2018 and 2022.
ggplot(data = covid_filtered, aes(x = year, y = deaths_per_million, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.8, size = 0.5) +
  labs(title = "Trend of COVID-19 Deaths per Million Over Time",
       x = "Year", y = "Deaths per Million") +
  facet_wrap(~ region, nrow = 2) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(nrow = 4))
This is our time-series analysis of the COVID-19 stringency index by region between the end of 2018 and 2022.
ggplot(data = covid_filtered, aes(x = year, y = stringency, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.7, size = 0.5) +
  labs(title = "Trend of COVID-19 Stringency Over Time",
       x = "Year", y = "Stringency") +
  facet_wrap(~ region, nrow = 2) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(nrow = 4))
This is our time-series analysis of climatic disasters, measured by the total affected population, by region.
Q3.1[is.na(Q3.1)] <- 0

ggplot(data = Q3.1, aes(x = year, y = total_affected, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.7, size = 0.5) +
  labs(title = "Trend of Total Affected from Climatic Disasters Over Time",
       x = "Year", y = "Total Affected") +
  facet_wrap(~ region, nrow = 2) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(nrow = 4))
This is our time-series analysis of conflict deaths by region between 2000 and 2016.
conflicts_filtered <- Q3.3[Q3.3$year >= as.Date("2000-01-01") & Q3.3$year <= as.Date("2016-12-31"), ]

ggplot(data = conflicts_filtered, aes(x = year, y = sum_deaths, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.3, size = 0.5) +  # Using loess smoothing method
  labs(title = "Trend of Deaths by Conflicts Over Time",
       x = "Year", y = "Sum Deaths") +
  facet_wrap(~ region, nrow = 2) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(nrow = 4))
We can see that the regions most affected by conflicts are the Middle East & North Africa, Sub-Saharan Africa and South Asia, followed to a lesser extent by Latin America & the Caribbean and Eastern Europe.
This is our time-series analysis of the population affected by conflicts, by region, between 2000 and 2016.
ggplot(data = conflicts_filtered, aes(x = year, y = pop_affected, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.3, size = 0.5) +  # Using loess smoothing method
  labs(title = "Trend of Population Affected by Conflicts Over Time",
       x = "Year", y = "pop_affected") +
  facet_wrap(~ region, nrow = 2) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  guides(color = guide_legend(nrow = 4))
We can see that the regions most affected by conflicts are the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe and, at times, Caucasus and Central Asia.
Now that we have visualized which regions are most impacted by these three events, we can run a correlation analysis per region to see whether these events indeed have an impact on the evolution of the SDG scores.
Here we want to analyse the correlation between climate disasters and the SDG goals in South and East Asia.
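The matrix `correlation_matrix_disaster_Asia` used in the next chunk is computed in a part of the report not shown here. A minimal sketch of how it could be built, assuming the region labels and the disaster variable `total_affected` present in `Q3.1`, is:

```r
# Sketch only: region names and variable choices are assumptions, not the report's exact chunk.
asia <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia"), ]
relevant_cols <- c(paste0("goal", c(1:13, 15, 16)), "total_affected")
correlation_matrix_disaster_Asia <- cor(asia[, relevant_cols], use = "complete.obs")
```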
cor_melted <- as.data.frame(as.table(correlation_matrix_disaster_Asia))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between the climate disasters and the SDG goals in South and East Asia')
We conclude that climate disasters do not appear to have a strong impact on the SDG scores.
Here we want to analyse the correlation between COVID-19 and the SDG goals, restricted to the COVID period.
covid_filtered <- Q3.2[Q3.2$year >= as.Date("2019-01-01"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7",
                      "goal8", "goal9", "goal10", "goal11", "goal12", "goal13",
                      "goal15", "goal16", "stringency", "cases_per_million",
                      "deaths_per_million")

# Subset data with relevant columns for correlation analysis
relevant_data <- covid_filtered[, relevant_columns]
correlation_matrix_Covid <- cor(relevant_data, use = "complete.obs")
kable(correlation_matrix_Covid)
|                    | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | stringency | cases_per_million | deaths_per_million |
|--------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|--------|--------|--------|--------|--------|------------|-------------------|--------------------|
| goal1              | 1.000 | 0.534 | 0.867 | 0.777 | 0.445 | 0.763 | 0.798 | 0.584 | 0.781 | 0.497 | 0.727 | -0.648 | -0.553 | 0.099 | 0.714 | 0.056 | 0.341 | 0.361 |
| goal2              | 0.534 | 1.000 | 0.560 | 0.541 | 0.469 | 0.605 | 0.469 | 0.636 | 0.569 | 0.240 | 0.463 | -0.353 | -0.284 | 0.122 | 0.451 | 0.088 | 0.206 | 0.242 |
| goal3              | 0.867 | 0.560 | 1.000 | 0.829 | 0.641 | 0.836 | 0.845 | 0.693 | 0.881 | 0.456 | 0.828 | -0.789 | -0.669 | 0.152 | 0.825 | 0.040 | 0.412 | 0.373 |
| goal4              | 0.777 | 0.541 | 0.829 | 1.000 | 0.656 | 0.764 | 0.803 | 0.596 | 0.773 | 0.309 | 0.758 | -0.655 | -0.558 | 0.058 | 0.674 | 0.113 | 0.349 | 0.339 |
| goal5              | 0.445 | 0.469 | 0.641 | 0.656 | 1.000 | 0.663 | 0.606 | 0.587 | 0.645 | 0.098 | 0.690 | -0.653 | -0.564 | 0.203 | 0.628 | 0.060 | 0.330 | 0.261 |
| goal6              | 0.763 | 0.605 | 0.836 | 0.764 | 0.663 | 1.000 | 0.765 | 0.711 | 0.811 | 0.366 | 0.766 | -0.727 | -0.583 | 0.262 | 0.729 | 0.069 | 0.389 | 0.398 |
| goal7              | 0.798 | 0.469 | 0.845 | 0.803 | 0.606 | 0.765 | 1.000 | 0.556 | 0.740 | 0.323 | 0.793 | -0.654 | -0.494 | 0.123 | 0.697 | 0.055 | 0.340 | 0.374 |
| goal8              | 0.584 | 0.636 | 0.693 | 0.596 | 0.587 | 0.711 | 0.556 | 1.000 | 0.695 | 0.387 | 0.587 | -0.635 | -0.556 | 0.283 | 0.627 | 0.024 | 0.356 | 0.278 |
| goal9              | 0.781 | 0.569 | 0.881 | 0.773 | 0.645 | 0.811 | 0.740 | 0.695 | 1.000 | 0.462 | 0.753 | -0.857 | -0.760 | 0.189 | 0.819 | 0.074 | 0.460 | 0.353 |
| goal10             | 0.497 | 0.240 | 0.456 | 0.309 | 0.098 | 0.366 | 0.323 | 0.387 | 0.462 | 1.000 | 0.281 | -0.496 | -0.469 | 0.215 | 0.519 | -0.030 | 0.262 | 0.142 |
| goal11             | 0.727 | 0.463 | 0.828 | 0.758 | 0.690 | 0.766 | 0.793 | 0.587 | 0.753 | 0.281 | 1.000 | -0.696 | -0.576 | 0.089 | 0.764 | 0.037 | 0.345 | 0.328 |
| goal12             | -0.648 | -0.353 | -0.789 | -0.655 | -0.653 | -0.727 | -0.654 | -0.635 | -0.857 | -0.496 | -0.696 | 1.000 | 0.876 | -0.316 | -0.825 | 0.013 | -0.466 | -0.292 |
| goal13             | -0.553 | -0.284 | -0.669 | -0.558 | -0.564 | -0.583 | -0.494 | -0.556 | -0.760 | -0.469 | -0.576 | 0.876 | 1.000 | -0.205 | -0.682 | -0.018 | -0.364 | -0.166 |
| goal15             | 0.099 | 0.122 | 0.152 | 0.058 | 0.203 | 0.262 | 0.123 | 0.283 | 0.189 | 0.215 | 0.089 | -0.316 | -0.205 | 1.000 | 0.303 | -0.068 | 0.169 | 0.223 |
| goal16             | 0.714 | 0.451 | 0.825 | 0.674 | 0.628 | 0.729 | 0.697 | 0.627 | 0.819 | 0.519 | 0.764 | -0.825 | -0.682 | 0.303 | 1.000 | -0.023 | 0.425 | 0.316 |
| stringency         | 0.056 | 0.088 | 0.040 | 0.113 | 0.060 | 0.069 | 0.055 | 0.024 | 0.074 | -0.030 | 0.037 | 0.013 | -0.018 | -0.068 | -0.023 | 1.000 | 0.041 | 0.336 |
| cases_per_million  | 0.341 | 0.206 | 0.412 | 0.349 | 0.330 | 0.389 | 0.340 | 0.356 | 0.460 | 0.262 | 0.345 | -0.466 | -0.364 | 0.169 | 0.425 | 0.041 | 1.000 | 0.416 |
| deaths_per_million | 0.361 | 0.242 | 0.373 | 0.339 | 0.261 | 0.398 | 0.374 | 0.278 | 0.353 | 0.142 | 0.328 | -0.292 | -0.166 | 0.223 | 0.316 | 0.336 | 0.416 | 1.000 |
cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between COVID and the SDG goals')
We reach the same conclusion: the COVID-19 variables show only weak correlations with the SDG scores, which is surprising.
Here we want to analyse the correlation between conflict deaths and the SDG goals, restricted to the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean and Eastern Europe regions.
# Filter data for specific regions
selected_regions <- c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia",
                      "Latin America & the Caribbean", "Eastern Europe")
conflicts_selected <- Q3.3[Q3.3$region %in% selected_regions, ]

# Select relevant columns for the correlation analysis
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7",
                      "goal8", "goal9", "goal10", "goal11", "goal12", "goal13",
                      "goal15", "goal16", "sum_deaths")

# Compute correlation matrix for the selected regions
correlation_matrix_Conflicts_Deaths <- cor(conflicts_selected[, relevant_columns],
                                           use = "complete.obs")

# View the correlation matrix
kable(correlation_matrix_Conflicts_Deaths)
|            | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | sum_deaths |
|------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|--------|--------|--------|--------|--------|------------|
| goal1      | 1.000 | 0.476 | 0.910 | 0.791 | 0.406 | 0.799 | 0.865 | 0.546 | 0.723 | 0.272 | 0.783 | -0.730 | -0.594 | 0.039 | 0.613 | -0.095 |
| goal2      | 0.476 | 1.000 | 0.544 | 0.531 | 0.540 | 0.638 | 0.531 | 0.571 | 0.530 | 0.102 | 0.475 | -0.376 | -0.322 | 0.154 | 0.430 | -0.173 |
| goal3      | 0.910 | 0.544 | 1.000 | 0.814 | 0.507 | 0.832 | 0.876 | 0.596 | 0.768 | 0.223 | 0.828 | -0.745 | -0.587 | 0.014 | 0.666 | -0.117 |
| goal4      | 0.791 | 0.531 | 0.814 | 1.000 | 0.645 | 0.748 | 0.808 | 0.536 | 0.696 | 0.089 | 0.768 | -0.667 | -0.533 | 0.007 | 0.496 | -0.101 |
| goal5      | 0.406 | 0.540 | 0.507 | 0.645 | 1.000 | 0.587 | 0.539 | 0.454 | 0.516 | -0.178 | 0.620 | -0.464 | -0.351 | 0.191 | 0.384 | -0.162 |
| goal6      | 0.799 | 0.638 | 0.832 | 0.748 | 0.587 | 1.000 | 0.812 | 0.670 | 0.734 | 0.137 | 0.788 | -0.711 | -0.529 | 0.187 | 0.599 | -0.166 |
| goal7      | 0.865 | 0.531 | 0.876 | 0.808 | 0.539 | 0.812 | 1.000 | 0.539 | 0.720 | 0.152 | 0.841 | -0.704 | -0.531 | 0.039 | 0.566 | -0.094 |
| goal8      | 0.546 | 0.571 | 0.596 | 0.536 | 0.454 | 0.670 | 0.539 | 1.000 | 0.609 | 0.209 | 0.542 | -0.519 | -0.389 | 0.181 | 0.462 | -0.102 |
| goal9      | 0.723 | 0.530 | 0.768 | 0.696 | 0.516 | 0.734 | 0.720 | 0.609 | 1.000 | 0.300 | 0.698 | -0.759 | -0.689 | 0.137 | 0.591 | -0.077 |
| goal10     | 0.272 | 0.102 | 0.223 | 0.089 | -0.178 | 0.137 | 0.152 | 0.209 | 0.300 | 1.000 | 0.035 | -0.297 | -0.299 | 0.118 | 0.275 | 0.078 |
| goal11     | 0.783 | 0.475 | 0.828 | 0.768 | 0.620 | 0.788 | 0.841 | 0.542 | 0.698 | 0.035 | 1.000 | -0.729 | -0.570 | 0.031 | 0.656 | -0.155 |
| goal12     | -0.730 | -0.376 | -0.745 | -0.667 | -0.464 | -0.711 | -0.704 | -0.519 | -0.759 | -0.297 | -0.729 | 1.000 | 0.865 | -0.170 | -0.666 | 0.122 |
| goal13     | -0.594 | -0.322 | -0.587 | -0.533 | -0.351 | -0.529 | -0.531 | -0.389 | -0.689 | -0.299 | -0.570 | 0.865 | 1.000 | -0.150 | -0.493 | 0.079 |
| goal15     | 0.039 | 0.154 | 0.014 | 0.007 | 0.191 | 0.187 | 0.039 | 0.181 | 0.137 | 0.118 | 0.031 | -0.170 | -0.150 | 1.000 | 0.191 | -0.063 |
| goal16     | 0.613 | 0.430 | 0.666 | 0.496 | 0.384 | 0.599 | 0.566 | 0.462 | 0.591 | 0.275 | 0.656 | -0.666 | -0.493 | 0.191 | 1.000 | -0.162 |
| sum_deaths | -0.095 | -0.173 | -0.117 | -0.101 | -0.162 | -0.166 | -0.094 | -0.102 | -0.077 | 0.078 | -0.155 | 0.122 | 0.079 | -0.063 | -0.162 | 1.000 |
# Melt the correlation matrix for ggplot2
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Deaths))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

# Create the heatmap
ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between Conflicts deaths and the SDG goals')
Finally, we want to analyse the correlation between the population affected by conflicts and the SDG goals, restricted to the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe, and Caucasus and Central Asia regions.
# Filter data for specific regions (pop_affected)
selected_regions <- c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia",
                      "Latin America & the Caribbean", "Eastern Europe",
                      "Caucasus and Central Asia")
conflicts_selected <- Q3.3[Q3.3$region %in% selected_regions, ]

# Select relevant columns for the correlation analysis
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7",
                      "goal8", "goal9", "goal10", "goal11", "goal12", "goal13",
                      "goal15", "goal16", "pop_affected")

# Compute correlation matrix for the selected regions
correlation_matrix_Conflicts_Pop_Affected <- cor(conflicts_selected[, relevant_columns],
                                                 use = "complete.obs")

# View the correlation matrix
kable(correlation_matrix_Conflicts_Pop_Affected)
|              | goal1 | goal2 | goal3 | goal4 | goal5 | goal6 | goal7 | goal8 | goal9 | goal10 | goal11 | goal12 | goal13 | goal15 | goal16 | pop_affected |
|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|--------|--------|--------|--------|--------|--------------|
| goal1        | 1.000 | 0.476 | 0.910 | 0.791 | 0.406 | 0.799 | 0.865 | 0.546 | 0.723 | 0.272 | 0.783 | -0.730 | -0.594 | 0.039 | 0.613 | -0.066 |
| goal2        | 0.476 | 1.000 | 0.544 | 0.531 | 0.540 | 0.638 | 0.531 | 0.571 | 0.530 | 0.102 | 0.475 | -0.376 | -0.322 | 0.154 | 0.430 | -0.083 |
| goal3        | 0.910 | 0.544 | 1.000 | 0.814 | 0.507 | 0.832 | 0.876 | 0.596 | 0.768 | 0.223 | 0.828 | -0.745 | -0.587 | 0.014 | 0.666 | -0.058 |
| goal4        | 0.791 | 0.531 | 0.814 | 1.000 | 0.645 | 0.748 | 0.808 | 0.536 | 0.696 | 0.089 | 0.768 | -0.667 | -0.533 | 0.007 | 0.496 | -0.030 |
| goal5        | 0.406 | 0.540 | 0.507 | 0.645 | 1.000 | 0.587 | 0.539 | 0.454 | 0.516 | -0.178 | 0.620 | -0.464 | -0.351 | 0.191 | 0.384 | -0.152 |
| goal6        | 0.799 | 0.638 | 0.832 | 0.748 | 0.587 | 1.000 | 0.812 | 0.670 | 0.734 | 0.137 | 0.788 | -0.711 | -0.529 | 0.187 | 0.599 | -0.106 |
| goal7        | 0.865 | 0.531 | 0.876 | 0.808 | 0.539 | 0.812 | 1.000 | 0.539 | 0.720 | 0.152 | 0.841 | -0.704 | -0.531 | 0.039 | 0.566 | -0.071 |
| goal8        | 0.546 | 0.571 | 0.596 | 0.536 | 0.454 | 0.670 | 0.539 | 1.000 | 0.609 | 0.209 | 0.542 | -0.519 | -0.389 | 0.181 | 0.462 | -0.099 |
| goal9        | 0.723 | 0.530 | 0.768 | 0.696 | 0.516 | 0.734 | 0.720 | 0.609 | 1.000 | 0.300 | 0.698 | -0.759 | -0.689 | 0.137 | 0.591 | 0.000 |
| goal10       | 0.272 | 0.102 | 0.223 | 0.089 | -0.178 | 0.137 | 0.152 | 0.209 | 0.300 | 1.000 | 0.035 | -0.297 | -0.299 | 0.118 | 0.275 | 0.074 |
| goal11       | 0.783 | 0.475 | 0.828 | 0.768 | 0.620 | 0.788 | 0.841 | 0.542 | 0.698 | 0.035 | 1.000 | -0.729 | -0.570 | 0.031 | 0.656 | -0.103 |
| goal12       | -0.730 | -0.376 | -0.745 | -0.667 | -0.464 | -0.711 | -0.704 | -0.519 | -0.759 | -0.297 | -0.729 | 1.000 | 0.865 | -0.170 | -0.666 | 0.107 |
| goal13       | -0.594 | -0.322 | -0.587 | -0.533 | -0.351 | -0.529 | -0.531 | -0.389 | -0.689 | -0.299 | -0.570 | 0.865 | 1.000 | -0.150 | -0.493 | 0.021 |
| goal15       | 0.039 | 0.154 | 0.014 | 0.007 | 0.191 | 0.187 | 0.039 | 0.181 | 0.137 | 0.118 | 0.031 | -0.170 | -0.150 | 1.000 | 0.191 | -0.108 |
| goal16       | 0.613 | 0.430 | 0.666 | 0.496 | 0.384 | 0.599 | 0.566 | 0.462 | 0.591 | 0.275 | 0.656 | -0.666 | -0.493 | 0.191 | 1.000 | -0.099 |
| pop_affected | -0.066 | -0.083 | -0.058 | -0.030 | -0.152 | -0.106 | -0.071 | -0.099 | 0.000 | 0.074 | -0.103 | 0.107 | 0.021 | -0.108 | -0.099 | 1.000 |
# Melt the correlation matrix for ggplot2
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Affected))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

# Create the heatmap
ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between Conflicts Affected Population and the SDG goals')
Focus on relationship between SDGs
How are the different SDGs linked? (We want to see whether some SDGs are linked, in the sense that a high score on one implies a high score on another, and thus whether we can form groups of SDGs that are comparable in that way.)
Let’s explore how the different SDGs are correlated by creating a heatmap of the correlations between our variables. We also add a small script to check whether the correlations are significantly different from 0. First, let’s select the SDG scores.
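The chunk that builds `sdg_scores`, `cor_matrix` and `p_matrix` is not shown in this extraction. A sketch of what it could look like, assuming the goal columns live in `data_question2` (the report's actual code may differ):

```r
# Select the SDG score columns (goal14 is absent from the dataset)
sdg_scores <- data_question2[, paste0("goal", c(1:13, 15, 16))]

# Pearson correlation matrix
cor_matrix <- cor(sdg_scores, use = "complete.obs")

# p-values from cor.test() for every pair of goals
n <- ncol(sdg_scores)
p_matrix <- matrix(NA, n, n, dimnames = list(names(sdg_scores), names(sdg_scores)))
for (i in seq_len(n)) {
  for (j in seq_len(n)) {
    p_matrix[i, j] <- cor.test(sdg_scores[[i]], sdg_scores[[j]])$p.value
  }
}
```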
We then reshape our data to be able to use the package ggplot2 to create our heatmap.
melted_cor_matrix <- melt(cor_matrix)
melted_p_matrix <- melt(matrix(as.vector(p_matrix), nrow = ncol(sdg_scores)))

# Combine the datasets
plot_data <- cbind(melted_cor_matrix, p_value = melted_p_matrix$value)

ggplot(plot_data, aes(Var1, Var2, fill = value)) +
  geom_tile() +
  geom_text(aes(label = sprintf("%.2f", value), color = p_value < 0.05), vjust = 1) +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white", midpoint = 0,
                       limit = c(-1, 1), space = "Lab", name = "Pearson\nCorrelation") +
  # Named values: black when significant, yellow when not
  scale_color_manual(values = c("TRUE" = "black", "FALSE" = "yellow")) +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1),
        axis.text.y = element_text(angle = 45, hjust = 1),
        legend.position = "none") +
  labs(x = 'SDG Goals', y = 'SDG Goals',
       title = 'Correlation Matrix with Significance Indicator')
Note that as said previously, we assessed the correlations to ascertain if they substantially deviated from zero, setting the significance level at an alpha of 5%. To aid in visualization, we marked any correlations that did not meet this level of significance with a yellow highlight in our graphical representation. The absence of yellow markings on our plot suggests that all Sustainable Development Goal (SDG) scores demonstrate a statistically significant correlation.
We can have a look at the shape of the correlation between the SDGs with the plot function.
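For instance, a base-R scatterplot matrix of a few goals (an illustrative subset, assuming the `sdg_scores` data frame of goal columns; the report may plot a different selection):

```r
# Pairwise scatter plots to inspect the shape of the relationships
plot(sdg_scores[, c("goal1", "goal3", "goal9", "goal12")],
     main = "Pairwise relationships between selected SDG scores")
```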